218 research outputs found
An Experiment in Ping-Pong Protocol Verification by Nondeterministic Pushdown Automata
An experiment is described that confirms that the security of a well-studied class
of cryptographic protocols (the Dolev-Yao intruder model) can be verified by
two-way nondeterministic pushdown automata (2NPDA). A nondeterministic pushdown
program checks whether the intersection of a regular language (the protocol to
verify) and a given Dyck language containing all canceling words is empty. If
it is not, an intruder can reveal secret messages sent between trusted users.
The verification is guaranteed to terminate in cubic time at most on a
2NPDA-simulator. The interpretive approach used in this experiment simplifies
the verification, by separating the nondeterministic pushdown logic and program
control, and makes it more predictable. We describe the interpretive approach
and the known transformational solutions, and show they share interesting
features. Also noteworthy is how abstract results from automata theory can
solve practical problems by programming language means.
Comment: In Proceedings MARS/VPT 2018, arXiv:1803.0866
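The core check described above — emptiness of the intersection of a regular language with a Dyck language of canceling words — can be sketched directly. The following is an illustrative cubic-time fixpoint over NFA state pairs, not the paper's 2NPDA construction; the protocol NFA, alphabet, and the operator names `E`/`D` below are hypothetical.

```python
# Sketch: decide whether an NFA (the protocol to verify) accepts some word
# that cancels completely, i.e. whether its language meets the Dyck language.
# cancel holds pairs (p, q) such that some word read from p to q cancels.

def dyck_intersection_nonempty(states, delta, inverse, start, finals):
    """delta: set of (p, symbol, q) NFA transitions.
    inverse: maps each symbol to its canceling partner.
    Returns True iff the NFA accepts a word canceling to the empty word."""
    cancel = {(p, p) for p in states}   # the empty word cancels trivially
    changed = True
    while changed:
        changed = False
        new = set()
        # rule 1: a transition pair (p -a-> p') ... (q' -a⁻¹-> q)
        # wrapping an already-canceling inner segment
        for (p, a, p1) in delta:
            for (q1, b, q) in delta:
                if b == inverse[a] and (p1, q1) in cancel and (p, q) not in cancel:
                    new.add((p, q))
        # rule 2: transitive composition of canceling segments
        for (p, q) in cancel:
            for (q2, r) in cancel:
                if q2 == q and (p, r) not in cancel:
                    new.add((p, r))
        if new:
            cancel |= new
            changed = True
    return any((start, f) in cancel for f in finals)
```

For a toy protocol where encryption `E` is undone by decryption `D`, an NFA reading `E` then `D` intersects the Dyck language, so a secret can be revealed; an NFA reading only `E` does not.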
Simulation of Two-Way Pushdown Automata Revisited
The linear-time simulation of 2-way deterministic pushdown automata (2DPDA)
by the Cook and Jones constructions is revisited. Following the semantics-based
approach by Jones, an interpreter is given which, when extended with
random-access memory, performs a linear-time simulation of 2DPDA. The recursive
interpreter works without the dump list of the original constructions, which
makes Cook's insight into linear-time simulation of exponential-time automata
more intuitive and the complexity argument clearer. The simulation is then
extended to 2-way nondeterministic pushdown automata (2NPDA) to provide for a
cubic-time recognition of context-free languages. The time required to run the
final construction depends on the degree of nondeterminism. The key mechanism
that enables the polynomial-time simulations is the sharing of computations by
memoization.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
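The sharing mechanism can be illustrated on a small scale. The sketch below memoizes a recursive recognizer for a toy context-free grammar (hypothetical grammar `S -> ( S ) S | empty`, balanced parentheses); it is not the paper's 2NPDA simulator, but it shows the same principle of caching results for repeated surface configurations so that an exponential search collapses to a polynomial one.

```python
from functools import lru_cache

def recognizes(word):
    """Memoized recursive recognizer for S -> ( S ) S | empty.
    S(i) caches the set of end positions j with word[i:j] derivable from S."""
    @lru_cache(maxsize=None)
    def S(i):
        ends = {i}                      # the empty production always applies
        if i < len(word) and word[i] == '(':
            for j in S(i + 1):          # inner S, shared via the cache
                if j < len(word) and word[j] == ')':
                    ends |= S(j + 1)    # trailing S after the closing paren
        return frozenset(ends)
    return len(word) in S(0)
```

Without the cache each position may be re-explored exponentially often; with it, each `S(i)` is computed once, mirroring how memoization of automaton configurations yields the polynomial bounds.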
A categorical foundation for structured reversible flowchart languages: Soundness and adequacy
Structured reversible flowchart languages are a class of imperative reversible
programming languages allowing for a simple diagrammatic representation of
control flow built from a limited set of control flow structures. This class
includes the reversible programming language Janus (without recursion), as well
as more recently developed reversible programming languages such as R-CORE and
R-WHILE.
In the present paper, we develop a categorical foundation for this class of
languages based on inverse categories with joins. We generalize the notion of
extensivity of restriction categories to one that may be accommodated by
inverse categories, and use the resulting decisions to give a reversible
representation of predicates and assertions. This leads to a categorical
semantics for structured reversible flowcharts, which we show to be
computationally sound and adequate, as well as equationally fully abstract with
respect to the operational semantics under certain conditions.
Reversible Programming: A Case Study of Two String-Matching Algorithms
String matching is a fundamental algorithmic problem. This study examines
the development and construction of two reversible string-matching algorithms:
a naive string-matching algorithm and the Rabin-Karp algorithm. The algorithms
are used to introduce reversible computing concepts, beginning from basic
reversible programming techniques to more advanced considerations about the
injectivization of the polynomial hash-update function employed by the
Rabin-Karp algorithm. The results are two clean input-preserving reversible
algorithms that require no additional space and have the same asymptotic time
complexity as their classic irreversible originals. This study aims to
contribute to the body of reversible algorithms and to the discipline of
reversible programming.
Comment: In Proceedings HCVS/VPT 2022, arXiv:2211.1067
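The injectivization idea can be sketched for the rolling-hash update: by consuming the incoming character and keeping the outgoing one, the update becomes invertible, so no history needs to be saved. The base `B` and modulus `P` below are illustrative choices, and this is only a sketch of the general technique; the paper's concrete injectivization may differ.

```python
# Hypothetical parameters: base and prime modulus for the polynomial hash.
B, P = 256, 1_000_000_007

def roll_forward(h, out_char, in_char, window):
    """Slide the window right by one: drop out_char, take in_char.
    Injective in h once both characters are part of the input/output."""
    msb = pow(B, window - 1, P)
    return ((h - ord(out_char) * msb) * B + ord(in_char)) % P

def roll_backward(h, out_char, in_char, window):
    """Exact inverse of roll_forward: recover the previous window's hash."""
    msb = pow(B, window - 1, P)
    inv_b = pow(B, P - 2, P)   # modular inverse of B (P is prime)
    return ((h - ord(in_char)) * inv_b + ord(out_char) * msb) % P
```

Because `roll_backward(roll_forward(h, o, i, w), o, i, w) == h`, the update is information-preserving and can be embedded in a clean reversible program without extra space.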
An Experiment Combining Specialization with Abstract Interpretation
It was previously shown that control-flow refinement can be achieved by a
program specializer incorporating property-based abstraction, to improve
termination and complexity analysis tools. We now show that this purpose-built
specializer can be reconstructed in a more modular way, and that the previous
results can be achieved using an off-the-shelf partial evaluation tool, applied
to an abstract interpreter. The key feature of the abstract interpreter is the
abstract domain, which is the product of the property-based abstract domain
with the concrete domain. This language-independent framework provides a
practical approach to implementing a variety of powerful specializers, and
contributes to a stream of research on using interpreters and specialization to
achieve program transformations.
Comment: In Proceedings VPT/HCVS 2020, arXiv:2008.0248
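The product-domain idea can be rendered in miniature. In the sketch below, each abstract value pairs an optional concrete component (static data for the partial evaluator) with a simple sign property (driving control-flow refinement). Both the sign domain and the single `add` operation are hypothetical stand-ins for the paper's property-based abstraction.

```python
def sign_of(n):
    return '+' if n > 0 else '-' if n < 0 else '0'

def add(x, y):
    """Add two product-domain values (concrete_or_None, sign).
    The concrete component is kept whenever both inputs are static;
    the sign component is propagated soundly otherwise."""
    cx, sx = x
    cy, sy = y
    if cx is not None and cy is not None:   # both static: stay concrete
        c = cx + cy
        return (c, sign_of(c))
    if sx == '0':                           # adding zero preserves the sign
        return (None, sy)
    if sy == '0':
        return (None, sx)
    if sx == sy and sx in '+-':             # same nonzero sign is preserved
        return (None, sx)
    return (None, 'T')                      # top: sign unknown
```

A partial evaluator over this domain specializes away additions of static values while the property half still distinguishes, say, positive from negative program points, which is what enables control-flow refinement.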
Multi-level Contextual Type Theory
Contextual type theory distinguishes between bound variables and
meta-variables to write potentially incomplete terms in the presence of
binders. It has found good use as a framework for concisely explaining
higher-order unification, characterizing holes in proofs, and developing a
foundation for programming with higher-order abstract syntax, as embodied by
the programming and reasoning environment Beluga. However, to reason about
these applications, we need to introduce meta^2-variables to characterize the
dependency on meta-variables and bound variables. In other words, we must go
beyond a two-level system granting only bound variables and meta-variables.
In this paper we generalize contextual type theory to n levels for arbitrary
n, so as to obtain a formal system offering bound variables, meta-variables and
so on all the way to meta^n-variables. We obtain a uniform account by
collapsing all these different kinds of variables into a single notion of
variable indexed by some level k. We give a decidable bi-directional type system
which characterizes beta-eta-normal forms together with a generalized
substitution operation.
Comment: In Proceedings LFMTP 2011, arXiv:1110.668
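The collapse of bound, meta-, and meta^n-variables into one level-indexed notion can be sketched with a toy term representation (entirely hypothetical; the type theory's actual terms carry contexts and types). Here bound variables sit at level 0, meta-variables at level 1, meta^2-variables at level 2, and a substitution at level k instantiates only the variables of that level.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str
    level: int        # 0 = bound, 1 = meta, 2 = meta^2, ...

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def subst(term, k, env):
    """Instantiate the level-k variables of `term` using env (name -> term);
    variables at every other level are left untouched."""
    if isinstance(term, Var):
        if term.level == k and term.name in env:
            return env[term.name]
        return term
    if isinstance(term, App):
        return App(subst(term.fun, k, env), subst(term.arg, k, env))
    return term
```

For example, instantiating the meta-variable `X` in `App(Var('X', 1), Var('x', 0))` at level 1 leaves the bound variable `x` alone, which is the uniformity the generalized substitution operation provides at every level.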
An Automatic Program Generator for Multi-Level Specialization
Program specialization can divide a computation into several computation stages. This paper investigates the theoretical limitations and practical problems of standard specialization tools, presents multi-level specialization, and demonstrates that, in combination with the cogen approach, it is far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction of generation time and generator size compared to previous attempts at multi-level specialization by self-application. Our approach to multi-level specialization seems well-suited for applications where generation time and program size are critical.
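A generating extension in the cogen style can be shown on the smallest possible scale. The two-level sketch below (illustrative only; the paper's generator handles a higher-order functional language and arbitrarily many levels) consumes the static exponent at stage one and emits the text of a residual program that runs fast at stage two.

```python
def power_gen(n):
    """Generating extension for power: given the static exponent n,
    emit the source of a residual program specialized to n."""
    body = '1'
    for _ in range(n):
        body = f'({body}) * x'     # unroll one multiplication per stage-1 step
    return f'def power_{n}(x):\n    return {body}\n'

# Stage 2: compile and run the residual program.
src = power_gen(3)
namespace = {}
exec(src, namespace)
assert namespace['power_3'](2) == 8   # 1 * 2 * 2 * 2
```

The specialization work (the loop over the exponent) happens once, in the generating extension, so each residual program contains only the straight-line multiplications.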
The role of leptomeningeal collaterals in redistributing blood flow during stroke.
Leptomeningeal collaterals (LMCs) connect the main cerebral arteries and provide alternative pathways for blood flow during ischaemic stroke. This is beneficial for reducing infarct size and reperfusion success after treatment. However, a better understanding of how LMCs affect blood flow distribution is indispensable to improve therapeutic strategies. Here, we present a novel in silico approach that incorporates case-specific in vivo data into a computational model to simulate blood flow in large semi-realistic microvascular networks from two different mouse strains, characterised by having many and almost no LMCs between middle and anterior cerebral artery (MCA, ACA) territories. This framework is unique because our simulations are directly aligned with in vivo data. Moreover, it allows us to analyse perfusion characteristics quantitatively across all vessel types and for networks with no, few and many LMCs. We show that the occlusion of the MCA directly caused a redistribution of blood that was characterised by increased flow in LMCs. Interestingly, the improved perfusion of MCA-sided microvessels after dilating LMCs came at the cost of a reduced blood supply in other brain areas. This effect was enhanced in regions close to the watershed line and when the number of LMCs was increased. Additional dilations of surface and penetrating arteries after stroke improved perfusion across the entire vasculature and partially recovered flow in the obstructed region, especially in networks with many LMCs, which further underlines the role of LMCs during stroke.